
NFS Mounts

What are NFS mounts?

The problem

Researchers use SLURM to schedule complex, large-scale computational jobs. For example, a researcher may request 1,000 cores for 12 hours to run a computation, after which SLURM terminates the job to prevent resource abuse. To feed these jobs, a shared drive is attached and made widely available to every compute node where SLURM jobs run. Because users know where everything is located (for example, Jupyter notebooks, Python environments, and so on), any user on those nodes can reach the data. As a result, the data is exposed.
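As a rough sketch (the job name, core count, and program below are illustrative, not taken from a real deployment), such a request might look like the following sbatch script:

    #!/bin/bash
    #SBATCH --job-name=large-analysis   # illustrative job name
    #SBATCH --ntasks=1000               # request 1,000 cores as tasks
    #SBATCH --time=12:00:00             # 12-hour limit; SLURM ends the job when it expires
    srun ./run_computation              # hypothetical program run across the allocation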

The solution

The NFS (Network File System) mounts mechanism lets researchers run SLURM jobs in a controlled way, keeping data hidden from other users and encrypted during computation.

The data resides on NFS mounts outside tiCrypt and is made available inside the VMs as read-only.

This simplifies managing large datasets: as a system administrator, you configure the mount once, and everyone you authorize can access the data once it is available in the VM.
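A minimal sketch of what this looks like at the system level, assuming a hypothetical NFS server named research-nfs exporting /export/datasets to the VM network (the exact paths, host names, and subnet depend on the deployment):

    # On the NFS server: /etc/exports (export path and client subnet are hypothetical)
    /export/datasets  10.0.0.0/24(ro,root_squash,sync)

    # On the VM, or baked into the VM image: mount the export read-only
    mount -t nfs -o ro research-nfs:/export/datasets /data

Exporting with ro and mounting with -o ro ensures the VM can read the data but never modify the source.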

This fits the workflow since every system deployment is different and not linked to any central "mothership."
As a result, tiCrypt keeps no central repository of user information, making the system fully decentralized.

Do users open up access to repository hosting platforms in practice, or do they mirror elements in the tiCrypt secure cloud and point updates to an institution-hosted repository?

For repositories such as CRAN and PyPI, deployments typically mirror them locally within the environment and mount the mirrors read-only in the VMs. Mirroring keeps the security boundary intact and gives the institution full control over the NFS mount point.
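For illustration only (the /mirrors paths are hypothetical, and each mirror must follow its repository's standard layout), a VM image might point its package tools at such read-only mirrors like this:

    # /etc/pip.conf on the VM image: install only from the local PyPI mirror
    [global]
    no-index = true
    find-links = /mirrors/pypi/wheels

    # ~/.Rprofile on the VM image: use the local CRAN mirror
    options(repos = c(CRAN = "file:///mirrors/cran"))

With this in place, pip install and install.packages() resolve against the mounted mirrors and never reach the public internet.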

tiCrypt does not support direct communication with GitHub for security reasons. Instead, Git, GitLab, or Gitea can be installed on the VM image to give users a local git environment for project work. This local instance is accessible only within the VM, ensuring compliance, revision control, and security.
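As one possible sketch (the paths are hypothetical), a bare repository on the VM's local storage can act as the project's origin without any outside network access:

    # Inside the VM: create a bare repository on local storage to serve as the origin
    git init --bare /srv/git/project.git

    # Users clone, commit, and push entirely within the VM
    git clone /srv/git/project.git ~/project
    cd ~/project
    # ... edit files, git add, git commit ...
    git push -u origin HEAD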